Analysis of Markov Reward Models with Partial Reward Loss Based on a Time Reverse Approach
Authors
Abstract
Effective numerical methods exist for the analysis of Markov reward models (MRMs) without reward loss or with complete reward loss, but the analysis of MRMs with partial reward loss is more complex. This paper presents an analytical description of the distribution and the moments of the accumulated reward of partial increment loss reward models, together with an effective numerical method to evaluate these measures. The solution method is based on the time-reverse process. The time-reverse analysis avoids computationally expensive techniques such as introducing a supplementary variable or evaluating numerical integrals. Although this approach seems to introduce difficulties at first sight, since the reverse process is inhomogeneous, it turns out that with a properly chosen performance measure, homogeneous differential equations characterize the desired reward measures. This set of homogeneous differential equations allows one to apply an iterative scheme similar to the randomization (or Jensen's) method, with all the advantages of that method, i.e., numerical stability and a pre-computable error bound.
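The paper's measure-specific recursion for partial-loss reward models is not reproduced in the abstract, but the randomization (Jensen's) method it builds on is standard. The sketch below, under assumed names (`Q` for the CTMC generator, `pi0` for the initial distribution, `eps` for the truncation tolerance), shows the basic idea on plain transient CTMC analysis: uniformize the generator into a stochastic matrix and sum Poisson-weighted powers, with the truncation error bounded in advance by the remaining Poisson mass.

```python
import numpy as np

def uniformization(Q, pi0, t, eps=1e-10):
    """Transient distribution pi(t) = pi0 * exp(Q t) of a CTMC
    via the randomization (Jensen's) method.
    Q: generator matrix (rows sum to 0); pi0: initial distribution."""
    lam = max(-Q.diagonal())           # uniformization rate >= max exit rate
    P = np.eye(len(pi0)) + Q / lam     # stochastic DTMC kernel
    w = np.exp(-lam * t)               # Poisson weight for k = 0
    acc = w                            # accumulated Poisson mass so far
    term = pi0.copy()                  # pi0 * P^k, updated iteratively
    result = w * term
    k = 0
    while 1.0 - acc > eps:             # pre-computable error bound
        k += 1
        term = term @ P
        w *= lam * t / k               # next Poisson weight, recursively
        acc += w
        result += w * term
    return result
```

Every intermediate quantity is non-negative, which is what gives the method its numerical stability; the loop length needed for a given tolerance is known before the iteration starts.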
Similar papers
Completion Time in Markov Reward Models with Partial Incremental Loss
This paper provides the completion time analysis of Markov reward models with partial incremental loss. The complexity of the model behaviour requires the use of an extra (supplementary) variable.
Numerical Analysis of Large Markov Reward Models
Early analysis of Markov Reward Models (MRMs) resulted in a double transform expression, whose numerical solution is based on inverse transformations in both the time and the reward variable domains. Better numerical methods were proposed based on the time domain properties of these models, such as the set of partial differential equations that describes the process evolution in time. This paper introduces...
On-off Markov Reward Models
The analysis of Markov Reward Models with preemptive resume policy usually results in a double transform expression, whose solution is based on an inverse transformation in both the time and the reward variable domains. This paper discusses the case when the reward rates take only the value 0 or a positive value c. These on-off Markov Reward Models are analyzed and a symbolic solution is ...
Aggregation Methods for Markov Reward Chains with Fast and Silent Transitions
We analyze the derivation of Markov reward chains from intermediate performance models that arise from formalisms for compositional performance analysis, such as stochastic process algebras, (generalized) stochastic Petri nets, etc. The intermediate models are typically extensions of continuous-time Markov reward chains with instantaneous labeled transitions. We give stochastic meaning to the intermedi...